Optimal Lipschitzian selection operator in quasi-convex optimization

Authors

Abstract


Related articles

Towards Optimal Sparse Inverse Covariance Selection through Non-Convex Optimization

We study the problem of reconstructing the graph of a sparse Gaussian Graphical Model from independent observations, which is equivalent to finding the non-zero elements of an inverse covariance matrix. For a model of size p and maximum degree d, the information-theoretic lower bound shows that recovering the graph perfectly requires at least d log p/κ samples, where κ is the ...
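As an aside on the problem setting only: the paper above studies non-convex approaches, but the recovery task itself is easy to illustrate with scikit-learn's standard convex l1-penalized estimator. In this minimal sketch the chain-graph precision matrix, sample size, penalty alpha, and threshold are all illustrative choices, not values from the paper:

```python
# A minimal sketch of sparse inverse covariance selection with scikit-learn's
# convex l1-penalized GraphicalLasso; the paper above studies non-convex
# alternatives, so this only illustrates the problem, not its method.
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)

# Ground-truth sparse precision (inverse covariance) matrix: a chain graph.
theta = np.array([[ 2.0, -0.8,  0.0,  0.0],
                  [-0.8,  2.0, -0.8,  0.0],
                  [ 0.0, -0.8,  2.0, -0.8],
                  [ 0.0,  0.0, -0.8,  2.0]])
cov = np.linalg.inv(theta)

# Draw i.i.d. Gaussian observations and fit; alpha controls sparsity.
X = rng.multivariate_normal(np.zeros(4), cov, size=2000)
model = GraphicalLasso(alpha=0.05).fit(X)

# Non-zero off-diagonal entries of the estimated precision are the edges.
print((np.abs(model.precision_) > 1e-3).astype(int))
```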


A Quasi-Newton Approach to Nonsmooth Convex Optimization

We extend the well-known BFGS quasi-Newton method and its limited-memory variant (LBFGS) to the optimization of nonsmooth convex objectives. This is done in a rigorous fashion by generalizing three components of BFGS to subdifferentials: the local quadratic model, the identification of a descent direction, and the Wolfe line search conditions. We apply the resulting subLBFGS algorithm to L2-reg...
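For intuition about the objective class only (this is not the subLBFGS method), here is a minimal sketch of plain subgradient descent on an L2-regularized hinge loss, a standard nonsmooth convex objective of the kind targeted; the toy data and the 1/(λt) step size are illustrative assumptions:

```python
# A minimal sketch, not the paper's subLBFGS: plain subgradient descent on an
# L2-regularized hinge loss, a typical nonsmooth convex objective.
import numpy as np

def subgradient_descent(X, y, lam=0.1, steps=500):
    """Minimize lam/2 * ||w||^2 + mean(max(0, 1 - y * (X @ w)))."""
    n, d = X.shape
    w = np.zeros(d)
    for t in range(1, steps + 1):
        margins = y * (X @ w)
        active = margins < 1                 # examples whose hinge term is active
        # Subgradient: lam*w minus the mean of y_i * x_i over violated margins.
        g = lam * w - (y[active, None] * X[active]).sum(axis=0) / n
        w -= g / (lam * t)                   # classic 1/(lam*t) step size
    return w

# Toy linearly separable data (illustrative values only).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2)) + np.array([[1.0, 1.0]])
y = np.where(X @ np.array([1.0, 1.0]) > 2.0, 1.0, -1.0)
print(subgradient_descent(X, y))
```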


Adaptive Operator Selection for Optimization

Evolutionary Algorithms (EAs) are stochastic optimization algorithms that have already shown their efficiency on many application domains. This is achieved largely through the many parameters that the user can set according to the problem at hand. However, the performance of EAs is very sensitive to the setting of these parameters, and there are no general guidelines for an efficient s...
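As a hedged illustration of the general idea, here is one common adaptive operator selection scheme, probability matching with a recency-weighted quality estimate; the operator names, reward model, and constants below are invented for the example and are not taken from the cited work:

```python
# A sketch of probability-matching adaptive operator selection (AOS):
# operators are picked in proportion to running quality estimates, which are
# updated from observed rewards.  All names and rewards here are invented.
import random

def probability_matching(quality, p_min=0.05):
    """Turn quality estimates into selection probabilities, keeping a
    minimum probability p_min per operator so exploration never stops."""
    total = sum(quality.values())
    k = len(quality)
    return {op: p_min + (1 - k * p_min) * q / total for op, q in quality.items()}

quality = {"mutation": 1.0, "crossover": 1.0}   # optimistic initial estimates
alpha = 0.3                                      # adaptation rate

for step in range(100):
    probs = probability_matching(quality)
    op = random.choices(list(probs), weights=list(probs.values()))[0]
    # Reward = observed fitness improvement; faked here for illustration.
    reward = random.random() * (1.5 if op == "crossover" else 1.0)
    quality[op] += alpha * (reward - quality[op])  # recency-weighted update

print(probability_matching(quality))
```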


Proximal Quasi-Newton Methods for Convex Optimization

In [19], a general, inexact, efficient proximal quasi-Newton algorithm for composite optimization problems has been proposed and a sublinear global convergence rate has been established. In this paper, we analyze the convergence properties of this method, both in the exact and inexact setting, in the case when the objective function is strongly convex. We also investigate a practical variant of t...
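To fix ideas about the composite setting, here is a minimal sketch of proximal gradient descent (ISTA) on f(x) = 0.5*||Ax - b||^2 + lam*||x||_1; a proximal quasi-Newton method would replace the scalar step 1/L below with a quasi-Newton metric, which this simpler sketch deliberately omits. The problem data are illustrative:

```python
# A minimal sketch of the composite setting via proximal gradient (ISTA),
# not the paper's proximal quasi-Newton method: minimize a smooth
# least-squares term plus a nonsmooth l1 penalty.
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, b, lam=0.1, steps=300):
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        grad = A.T @ (A @ x - b)             # gradient of the smooth term
        x = soft_threshold(x - grad / L, lam / L)   # prox step on the l1 term
    return x

# Toy sparse recovery problem (illustrative values only).
rng = np.random.default_rng(0)
A = rng.normal(size=(50, 20))
x_true = np.zeros(20)
x_true[:3] = [3.0, -2.0, 1.5]
b = A @ x_true + 0.01 * rng.normal(size=50)
print(np.round(ista(A, b), 2))
```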


Beyond Convexity: Stochastic Quasi-Convex Optimization

Stochastic convex optimization is a basic and well-studied primitive in machine learning. It is well known that convex and Lipschitz functions can be minimized efficiently using Stochastic Gradient Descent (SGD). The Normalized Gradient Descent (NGD) algorithm is an adaptation of Gradient Descent which updates according to the direction of the gradients, rather than the gradients themselves. ...
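A minimal sketch of the NGD update described above: each step moves a fixed length along the gradient's direction rather than by the gradient itself. The quasi-convex test function f(x) = sqrt(||x||), whose gradients vanish far from the minimizer, and the step size are illustrative choices, not from the paper:

```python
# Normalized Gradient Descent: x <- x - eta * g / ||g||, using only the
# gradient's direction.  On f(x) = sqrt(||x||), plain GD with a fixed step
# crawls where gradients are tiny, while NGD keeps a constant step length.
import numpy as np

def ngd(grad, x0, eta=0.05, steps=200):
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        g = grad(x)
        norm = np.linalg.norm(g)
        if norm < 1e-12:                 # (near-)stationary point: stop
            break
        x -= eta * g / norm              # normalized step: direction only
    return x

def grad(x):
    # Gradient of f(x) = ||x||^(1/2), a quasi-convex function.
    return x / (2.0 * np.linalg.norm(x) ** 1.5)

print(ngd(grad, x0=[4.0, -3.0]))         # converges to a small ball around 0
```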



Journal

Journal title: Journal of Mathematical Analysis and Applications

Year: 1987

ISSN: 0022-247X

DOI: 10.1016/0022-247x(87)90140-5